Kate Crawford
The Foundation Model Transparency Index
Bommasani, Rishi, Klyman, Kevin, Longpre, Shayne, Kapoor, Sayash, Maslej, Nestor, Xiong, Betty, Zhang, Daniel, Liang, Percy
Foundation models have rapidly permeated society, catalyzing a wave of generative AI applications spanning enterprise and consumer-facing contexts. While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies (e.g. social media). Reversing this trend is essential: transparency is a vital precondition for public accountability, scientific innovation, and effective governance. To assess the transparency of the foundation model ecosystem and help improve transparency over time, we introduce the Foundation Model Transparency Index. The Foundation Model Transparency Index specifies 100 fine-grained indicators that comprehensively codify transparency for foundation models, spanning the upstream resources used to build a foundation model (e.g. data, labor, compute), details about the model itself (e.g. size, capabilities, risks), and the downstream use (e.g. distribution channels, usage policies, affected geographies). We score 10 major foundation model developers (e.g. OpenAI, Google, Meta) against the 100 indicators to assess their transparency. To facilitate and standardize assessment, we score developers in relation to their practices for their flagship foundation model (e.g. GPT-4 for OpenAI, PaLM 2 for Google, Llama 2 for Meta). We present 10 top-level findings about the foundation model ecosystem: for example, no developer currently discloses significant information about the downstream impact of its flagship model, such as the number of users, affected market sectors, or how users can seek redress for harm. Overall, the Foundation Model Transparency Index establishes the level of transparency today to drive progress on foundation model governance via industry standards and regulatory intervention.
- South America (0.67)
- Africa (0.45)
- Europe > United Kingdom > England (0.27)
- (7 more...)
- Social Sector (1.00)
- Media (1.00)
- Information Technology > Services (1.00)
- (10 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Weekly Top 10 Automation Articles
This week's Top Automation Articles highlights the potential of low-code/no-code platforms in business, and looks at why Kate Crawford, writing in her book, argues that technology experts are misunderstanding the concept of Artificial Intelligence. The introduction of the new Apple Card family is an exciting development, and we also examine why big brands like Gucci are not yet realizing the worth of cryptocurrency. There is much more to explore. Let's dive into the automation world! The potential for low-code/no-code platforms is enormous.
Hitting the Books: How IBM's metadata research made US drones even deadlier
If there's one thing the United States military gets right, it's lethality. Yet even once the US military has you in its sights, it may not know who you actually are -- such is the nature of so-called "signature strikes" -- even as that wrathful finger of God is called down from on high. As Kate Crawford, Microsoft Research principal and co-founder of the AI Now Institute at NYU, lays out in this fascinating excerpt from her new book, Atlas of AI, the military-industrial complex is alive and well and now leveraging metadata surveillance scores derived by IBM to decide which home/commute/gender reveal party to drone strike next. And if you think that same insidious technology isn't already trickling down to infest the domestic economy, I have a credit score to sell you. Excerpted from Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford, published by Yale University Press.
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Radical AI podcast: featuring Kate Crawford
Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Kate Crawford about the Atlas of AI. What is the Atlas of AI? How is AI an industry of extraction? How is AI impacting the planet? To answer these questions and more, we welcome to the show Dr Kate Crawford to discuss her new book Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.
- North America > United States > New York (0.07)
- North America > United States > Colorado > Boulder County > Boulder (0.07)
- Information Technology > Communications > Mobile (0.69)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.45)
Anatomy of an AI System digital poster. - Victoria & Albert Museum - Search the Collections
The Anatomy of an AI project, created and designed by Kate Crawford and Vladan Joler, consists of a website, digital publication, physical publication and digital map. Published online in September 2018, the project responds to the technical and human infrastructure behind Amazon's voice assistant 'Alexa' and the Amazon Echo device that enables it. As is often the case with proprietary or closed technology such as Amazon's Alexa, it is difficult to see the infrastructure and technology behind these and similar voice assistants. The Anatomy of an AI System project shows in detail – the result of extensive research by technology and social science researchers, as well as Joler and Crawford – the complex journey through which digitally-enabled consumer devices come to exist in society. This digital poster, to be printed at a minimum scale of 220x360 cm, is a contemporary record of obscured, inaccessible digital and corporate processes.
- North America > United States > New York (0.06)
- Europe > Serbia > Vojvodina > South Bačka District > Novi Sad (0.06)
Kate Crawford & Trevor Paglen SXSW 2019
Are Artificial Intelligence and Machine Learning really the right metaphors to address training sets that feed into automated processes? Kate Crawford and Trevor Paglen look into the production of training data and uncover the historical origins, labor practices, infrastructures, and epistemological assumptions, with biases and skews built into them from the outset. About SXSW: SXSW dedicates itself to helping creative people achieve their goals. Founded in 1987, SXSW is best known for its conference and festivals that celebrate the convergence of the interactive, film, and music industries. An essential destination for global professionals, the event features sessions, showcases, screenings, exhibitions, and a variety of networking opportunities.
Kate Crawford on AI and Power: From Bias to Justice
Machine learning systems now play a much bigger role in many of our social institutions, from education to healthcare to criminal justice. But many scholars have shown how these systems are built on data that reproduce structural bias and discrimination. In this talk, Professor Crawford opens the substrates of training data to uncover the historical origins, labor practices, infrastructures, and epistemological assumptions that go into the production of artificial intelligence. Rather than a focus on technically correcting biases, she argues for a recentering of justice and the enforcement of limits on centralized power. Kate Crawford, Co-Founder of the AI Now Institute, is a Distinguished Research Professor at NYU and a Principal Researcher at Microsoft Research, and she is a leading scholar of the social implications of data systems, machine learning, and artificial intelligence.
- Oceania > Australia > New South Wales (0.07)
- North America > United States > New York (0.07)
- Asia (0.07)
'People fix things. Tech doesn't fix things.' – TechCrunch
Veena Dubal is an unlikely star in the tech world. A scholar of labor practices regarding the taxi and ride-hailing industries and an Associate Professor at San Francisco's U.C. Hastings College of the Law, her work on the ethics of the gig economy has been covered by the New York Times, NBC News, New York Magazine, and other publications. She's been in public dialogue with Naomi Klein and other famous authors, and penned a prominent op-ed on facial recognition tech in San Francisco -- all while winning awards for her contributions to legal scholarship in her area of specialization, labor and employment law. At the annual symposium of the AI Now Institute, an interdisciplinary research center at New York University, Dubal was a featured speaker. The symposium is the largest annual public gathering of the NYU-affiliated research group that examines AI's social implications.
- North America > United States > New York (0.47)
- North America > United States > California > San Francisco County > San Francisco (0.46)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- Law (1.00)
- Transportation > Passenger (0.88)
- Transportation > Ground > Road (0.50)
"People fix things. Tech doesn't fix things." – TechCrunch
Veena Dubal is an unlikely star in the tech world. A scholar of labor practices regarding the taxi and ride-hailing industries and an Associate Professor at San Francisco's U.C. Hastings College of the Law, her work on the ethics of the gig economy has been covered by the New York Times, NBC News, New York Magazine, and other publications. She's been in public dialogue with Naomi Klein and other famous authors, and penned a prominent op-ed on facial recognition tech in San Francisco -- all while winning awards for her contributions to legal scholarship in her area of specialization, labor and employment law. At the annual symposium of the AI Now Institute, an interdisciplinary research center at New York University, Dubal was a featured speaker. The symposium is the largest annual public gathering of the NYU-affiliated research group that examines AI's social implications.
- North America > United States > New York (0.47)
- North America > United States > California > San Francisco County > San Francisco (0.46)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- Law (1.00)
- Transportation > Passenger (0.88)
- Transportation > Ground > Road (0.50)
How will AI change your life? AI Now Institute founders Kate Crawford and Meredith Whittaker explain.
Ask a layman about artificial intelligence and they might point to sci-fi villains such as HAL from 2001: A Space Odyssey or the Terminator. But the co-founders of the AI Now Institute, Meredith Whittaker and Kate Crawford, want to change the conversation. Instead of talking about far-flung super-intelligent AI, they argued on the latest episode of Recode Decode, we should be talking about the ways AI is affecting people right now, in everything from education to policing to hiring. Rather than killer robots, you should be concerned about what happens to your résumé when it hits a program like the one Amazon tried to build. "They took two years to design, essentially, an AI automatic résumé scanner," Crawford said. "And they found that it was so biased against any female applicant that if you even had the word 'woman' on your résumé that it went to the bottom of the pile." That's a classic example of what Crawford calls "dirty data." Even though people think of algorithms as being ...
- North America > United States > California (0.14)
- Asia > China (0.05)
- North America > United States > New York (0.04)
- (4 more...)
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- (3 more...)